Results 1 - 20 of 28,360
1.
J Comp Eff Res ; 13(5): e230085, 2024 May.
Article in English | MEDLINE | ID: mdl-38567965

ABSTRACT

Aim: The first objective is to compare the performance of two-stage residual inclusion (2SRI) and two-stage predictor substitution (2SPS) with the multivariable generalized linear model (GLM) in terms of reducing unmeasured confounding bias. The second objective is to demonstrate the ability of 2SRI and 2SPS to alleviate unmeasured confounding when noncollapsibility exists. Materials & methods: This study comprises a simulation study and an empirical example from a real-world UK population health dataset (Clinical Practice Research Datalink). The instrumental variable (IV) used is based on physicians' prescribing preferences (defined by prescribing history). Results: The percent bias of 2SRI in treatment effect estimates was lower than that of GLM and 2SPS, and was below 15% in most scenarios. Further, 2SRI was robust to mild noncollapsibility, with percent bias below 50%. As the level of unmeasured confounding increased, the ability to alleviate noncollapsibility decreased. Strong IVs tended to be more robust to noncollapsibility than weak IVs. Conclusion: 2SRI tends to be less biased than GLM and 2SPS in estimating treatment effects. It can be robust to noncollapsibility when the unmeasured confounding effect is mild.
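As a rough illustration of the control-function logic behind 2SRI, the following is a minimal sketch on simulated data. The coefficients, logistic data-generating process, and variable names are invented for illustration; this is not the paper's CPRD analysis.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 20_000
z = rng.binomial(1, 0.5, n)            # instrument, e.g., prescribing preference
u = rng.normal(size=n)                 # unmeasured confounder
treat = rng.binomial(1, 1 / (1 + np.exp(-(0.8 * z + u))))
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * treat + u))))

# Stage 1: regress treatment on the instrument and keep the residual.
stage1 = sm.OLS(treat, sm.add_constant(z)).fit()
resid = treat - stage1.fittedvalues

# Stage 2: the stage-1 residual enters as a control function that absorbs
# unmeasured confounding (log-ORs remain noncollapsible, which is the
# complication the paper studies).
X2 = sm.add_constant(np.column_stack([treat, resid]))
two_sri = sm.GLM(y, X2, family=sm.families.Binomial()).fit()

# Naive GLM ignoring the unmeasured confounder, for comparison.
naive = sm.GLM(y, sm.add_constant(treat), family=sm.families.Binomial()).fit()
print("2SRI log-OR:", two_sri.params[1], "naive log-OR:", naive.params[1])
```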


Subjects
Confounding Factors, Epidemiologic; Practice Patterns, Physicians'; Humans; Practice Patterns, Physicians'/statistics & numerical data; Bias; Linear Models; Least-Squares Analysis; United Kingdom; Computer Simulation
2.
Genome Biol ; 25(1): 101, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38641647

ABSTRACT

Many bioinformatics methods seek to reduce reference bias, but no methods exist to comprehensively measure it. Biastools analyzes and categorizes instances of reference bias. It works in various scenarios: when the donor's variants are known and reads are simulated; when donor variants are known and reads are real; and when variants are unknown and reads are real. Using biastools, we observe that more inclusive graph genomes result in fewer biased sites. We find that end-to-end alignment reduces bias at indels relative to local aligners. Finally, we use biastools to characterize how T2T references improve large-scale bias.
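The core quantity involved can be illustrated without biastools itself: at heterozygous sites, unbiased alignment should support REF and ALT roughly equally. This is a hedged sketch, not the tool's interface; the read counts and the 0.65 flagging threshold are invented.

```python
# Allele balance at heterozygous sites as a simple reference-bias signal.
def allele_balance(ref_reads: int, alt_reads: int) -> float:
    """Fraction of reads supporting the reference allele."""
    total = ref_reads + alt_reads
    return ref_reads / total if total else float("nan")

sites = [(12, 11), (30, 14), (8, 9), (25, 6)]  # hypothetical (ref, alt) counts
for i, (ref, alt) in enumerate(sites, 1):
    ab = allele_balance(ref, alt)
    flag = "biased toward REF" if ab > 0.65 else "balanced"
    print(f"site {i}: balance={ab:.2f} ({flag})")
```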


Subjects
Genome; Genomics; Genomics/methods; Computational Biology; INDEL Mutation; Bias; Sequence Analysis, DNA/methods; Software; High-Throughput Nucleotide Sequencing/methods
3.
Proc Natl Acad Sci U S A ; 121(16): e2317602121, 2024 Apr 16.
Article in English | MEDLINE | ID: mdl-38598346

ABSTRACT

Algorithmic bias occurs when algorithms incorporate biases in the human decisions on which they are trained. We find that people see more of their biases (e.g., age, gender, race) in the decisions of algorithms than in their own decisions. Research participants saw more bias in the decisions of algorithms trained on their decisions than in their own decisions, even when those decisions were the same and participants were incentivized to reveal their true beliefs. By contrast, participants saw as much bias in the decisions of algorithms trained on their decisions as in the decisions of other participants and algorithms trained on the decisions of other participants. Cognitive psychological processes and motivated reasoning help explain why people see more of their biases in algorithms. Research participants most susceptible to bias blind spot were most likely to see more bias in algorithms than self. Participants were also more likely to perceive algorithms than themselves to have been influenced by irrelevant biasing attributes (e.g., race) but not by relevant attributes (e.g., user reviews). Because participants saw more of their biases in algorithms than themselves, they were more likely to make debiasing corrections to decisions attributed to an algorithm than to themselves. Our findings show that bias is more readily perceived in algorithms than in self and suggest how to use algorithms to reveal and correct biased human decisions.


Subjects
Motivation; Problem Solving; Humans; Bias; Algorithms
4.
Stat Methods Med Res ; 33(5): 909-927, 2024 May.
Article in English | MEDLINE | ID: mdl-38567439

ABSTRACT

Understanding whether and how treatment effects vary across subgroups is crucial to inform clinical practice and recommendations. Accordingly, the assessment of heterogeneous treatment effects based on pre-specified potential effect modifiers has become a common goal in modern randomized trials. However, when one or more potential effect modifiers are missing, complete-case analysis may lead to bias and under-coverage. While statistical methods for handling missing data have been proposed and compared for individually randomized trials with missing effect modifier data, few guidelines exist for the cluster-randomized setting, where intracluster correlations in the effect modifiers, outcomes, or even missingness mechanisms may introduce further threats to accurate assessment of heterogeneous treatment effects. In this article, the performance of several missing data methods is compared through a simulation study of cluster-randomized trials with a continuous outcome and missing binary effect modifier data, and further illustrated using real data from the Work, Family, and Health Study. Our results suggest that multilevel multiple imputation and Bayesian multilevel multiple imputation perform better than the other available methods, and that Bayesian multilevel multiple imputation yields lower bias and coverage closer to nominal than standard multilevel multiple imputation when there are model specification or compatibility issues.
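A hedged sketch of the impute-then-pool workflow on made-up trial data follows, using statsmodels' single-level MICE; note the paper's preferred approach is *multilevel* multiple imputation, which additionally models the cluster structure, and that difference is exactly its point.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.imputation.mice import MICE, MICEData

rng = np.random.default_rng(1)
n = 2000
treat = rng.binomial(1, 0.5, n)
modifier = rng.binomial(1, 0.4, n).astype(float)
y = 1.0 * treat + 0.5 * modifier + 0.8 * treat * modifier + rng.normal(size=n)

df = pd.DataFrame({"y": y, "treat": treat, "modifier": modifier})
# Make the modifier missing more often when the outcome is large (MAR),
# which is where complete-case analysis starts to go wrong.
drop = rng.random(n) < 1 / (1 + np.exp(-(y - y.mean())))
df.loc[drop, "modifier"] = np.nan

# Impute, fit the interaction model on each completed dataset, and pool
# the estimates with Rubin's rules.
imp = MICEData(df)
fit = MICE("y ~ treat * modifier", sm.OLS, imp).fit(10, 10)  # 10 imputations
print(fit.summary())
```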


Subjects
Bayes Theorem; Randomized Controlled Trials as Topic; Randomized Controlled Trials as Topic/statistics & numerical data; Humans; Cluster Analysis; Data Interpretation, Statistical; Bias; Models, Statistical; Treatment Outcome; Computer Simulation
5.
J Comp Eff Res ; 13(5): e230044, 2024 May.
Article in English | MEDLINE | ID: mdl-38567966

ABSTRACT

Aim: This simulation study assesses the utility of physician's prescribing preference (PPP) as an instrumental variable at moderate and smaller sample sizes. Materials & methods: We designed a simulation study to imitate comparative effectiveness research under different sample sizes. We compare the performance of instrumental variable (IV) and non-IV approaches using two-stage least squares (2SLS) and ordinary least squares (OLS) methods, respectively. Further, we test the performance of different forms of proxies for PPP as an IV. Results: The percent bias of 2SLS is approximately 20%, while the percent bias of OLS is close to 60%. The sample size is not associated with the level of bias for the PPP IV approach. Conclusion: Irrespective of sample size, the PPP IV approach leads to less biased estimates of treatment effectiveness than OLS adjusting for known confounding only. Particularly for smaller sample sizes, we recommend constructing PPP from long prescribing histories to improve statistical power.
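One plausible encoding of the PPP proxy (the proportion of a physician's prior prescriptions that were the index drug), followed by a manual 2SLS point estimate, can be sketched as below; all data are simulated and the design is illustrative, not the paper's.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_docs, per_doc = 200, 50
pref = rng.uniform(0.2, 0.8, n_docs)          # each doctor's latent preference
doc = np.repeat(np.arange(n_docs), per_doc)
u = rng.normal(size=n_docs * per_doc)          # unmeasured confounder
drug_a = rng.binomial(1, 1 / (1 + np.exp(-(2 * (pref[doc] - 0.5) + u))))
y = 1.0 * drug_a + 1.0 * u + rng.normal(size=len(drug_a))

# PPP proxy: proportion of the doctor's *previous* patients given drug A.
ppp = np.zeros_like(y)
for d in range(n_docs):
    idx = np.where(doc == d)[0]
    cum = np.cumsum(drug_a[idx])
    ppp[idx[1:]] = cum[:-1] / np.arange(1, len(idx))
ppp[::per_doc] = drug_a.mean()                 # first patient: overall rate

# Manual 2SLS point estimate (proper SEs need the usual IV correction).
fitted = sm.OLS(drug_a, sm.add_constant(ppp)).fit().fittedvalues
iv_est = sm.OLS(y, sm.add_constant(fitted)).fit().params[1]
ols_est = sm.OLS(y, sm.add_constant(drug_a)).fit().params[1]
print(f"2SLS: {iv_est:.2f}  OLS: {ols_est:.2f}  (truth: 1.00)")
```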


Subjects
Comparative Effectiveness Research; Computer Simulation; Practice Patterns, Physicians'; Humans; Comparative Effectiveness Research/methods; Sample Size; Practice Patterns, Physicians'/statistics & numerical data; Least-Squares Analysis; Bias
6.
Epidemiology ; 35(3): 349-358, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38630509

ABSTRACT

Accurate outcome and exposure ascertainment in electronic health record (EHR) data, referred to as EHR phenotyping, relies on the completeness and accuracy of EHR data for each individual. However, some individuals, such as those with a greater comorbidity burden, visit the health care system more frequently and thus have more complete data, compared with others. Ignoring such dependence of exposure and outcome misclassification on visit frequency can bias estimates of associations in EHR analysis. We developed a framework for describing the structure of outcome and exposure misclassification due to informative visit processes in EHR data and assessed the utility of a quantitative bias analysis approach to adjusting for bias induced by informative visit patterns. Using simulations, we found that this method produced unbiased estimates across all informative visit structures, if the phenotype sensitivity and specificity were correctly specified. We applied this method in an example where the association between diabetes and progression-free survival in metastatic breast cancer patients may be subject to informative presence bias. The quantitative bias analysis approach allowed us to evaluate robustness of results to informative presence bias and indicated that findings were unlikely to change across a range of plausible values for phenotype sensitivity and specificity. Researchers using EHR data should carefully consider the informative visit structure reflected in their data and use appropriate approaches such as the quantitative bias analysis approach described here to evaluate robustness of study findings.
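A minimal flavor of quantitative bias analysis for misclassification is the classic Rogan-Gladen-style correction swept over a grid of assumed sensitivity/specificity values; this simple sketch is not the authors' full framework (which lets misclassification depend on visit frequency), and all numbers are invented.

```python
def corrected_risk(observed: float, se: float, sp: float) -> float:
    """Back-correct an observed proportion for outcome misclassification."""
    return (observed + sp - 1) / (se + sp - 1)

obs_exposed, obs_unexposed = 0.30, 0.20      # hypothetical observed risks
for se in (0.80, 0.90, 0.99):                # assumed phenotype sensitivity
    for sp in (0.90, 0.95, 0.99):            # assumed phenotype specificity
        rr = (corrected_risk(obs_exposed, se, sp)
              / corrected_risk(obs_unexposed, se, sp))
        print(f"se={se:.2f} sp={sp:.2f} -> corrected RR={rr:.2f}")
```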


Subjects
Breast Neoplasms; Electronic Health Records; Humans; Female; Research Design; Bias; Cognition
7.
Genet Sel Evol ; 56(1): 30, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38632535

ABSTRACT

BACKGROUND: Breeding queens may be mated with drones that are produced by a single drone-producing queen (DPQ), or a group of sister-DPQs, but often only the dam of the DPQ(s) is reported in the pedigree. Furthermore, datasets may include colony phenotypes from DPQs that were open-mated at different locations, and thus to a heterogeneous drone population. METHODS: Simulation was used to investigate the impact of the mating strategy and its modelling on the estimates of genetic parameters and genetic trends when the DPQs are treated in different ways in the statistical evaluation model. We quantified the bias and standard error of the estimates when breeding queens were mated to one DPQ or a group of DPQs, assuming that this information was known or not. We also investigated four alternative strategies to accommodate the phenotypes of open-mated DPQs in the genetic evaluation: excluding their phenotypes, adding a dummy pseudo-sire in the pedigree, or adding a non-genetic (fixed or random) effect to the statistical evaluation model to account for the origin of the mates. RESULTS: The most precise estimates of genetic parameters and genetic trends were obtained when breeding queens were mated with drones of single DPQs that are correctly assigned in the pedigree. However, when they were mated with drones from one or a group of DPQs, and this information was not known, erroneous assumptions led to considerable bias in these estimates. Furthermore, genetic variances were considerably overestimated when phenotypes of colonies from open-mated DPQs were adjusted for their mates by adding a dummy pseudo-sire in the pedigree for each subpopulation of open-mating drones. On the contrary, correcting for the heterogeneous drone population by adding a non-genetic effect in the evaluation model produced unbiased estimates. CONCLUSIONS: Knowing only the dam of the DPQ(s) used in each mating may lead to erroneous assumptions on how DPQs were used and severely bias the estimates of genetic parameters and trends. Thus, we recommend keeping track of DPQs in the pedigree, and not only of the dams of DPQ(s). Records from DPQ colonies with queens open-mated to a heterogeneous drone population can be integrated by adding non-genetic effects to the statistical evaluation model.


Subjects
Reproduction; Bees; Animals; Uncertainty; Phenotype; Computer Simulation; Bias
8.
Nat Med ; 30(4): 1174-1190, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38641744

ABSTRACT

Despite increasing numbers of regulatory approvals, deep learning-based computational pathology systems often overlook the impact of demographic factors on performance, potentially leading to biases. This concern is all the more important as computational pathology has leveraged large public datasets that underrepresent certain demographic groups. Using publicly available data from The Cancer Genome Atlas and the EBRAINS brain tumor atlas, as well as internal patient data, we show that whole-slide image classification models display marked performance disparities across different demographic groups when used to subtype breast and lung carcinomas and to predict IDH1 mutations in gliomas. For example, when using common modeling approaches, we observed performance gaps (in area under the receiver operating characteristic curve) between white and Black patients of 3.0% for breast cancer subtyping, 10.9% for lung cancer subtyping and 16.0% for IDH1 mutation prediction in gliomas. We found that richer feature representations obtained from self-supervised vision foundation models reduce performance variations between groups. These representations provide improvements upon weaker models even when those weaker models are combined with state-of-the-art bias mitigation strategies and modeling choices. Nevertheless, self-supervised vision foundation models do not fully eliminate these discrepancies, highlighting the continuing need for bias mitigation efforts in computational pathology. Finally, we demonstrate that our results extend to other demographic factors beyond patient race. Given these findings, we encourage regulatory and policy agencies to integrate demographic-stratified evaluation into their assessment guidelines.
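The demographic-stratified evaluation the authors advocate reduces, at its simplest, to computing AUROC per group and reporting the gap; this is a sketch with synthetic scores and labels standing in for a slide-level classifier.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
groups = rng.choice(["group_a", "group_b"], size=1000)
y_true = rng.binomial(1, 0.4, size=1000)
# Simulate a model that is systematically noisier for group_b.
noise = np.where(groups == "group_b", 1.5, 0.7)
scores = y_true + rng.normal(0, noise)

aucs = {g: roc_auc_score(y_true[groups == g], scores[groups == g])
        for g in ("group_a", "group_b")}
print(aucs, "gap:", abs(aucs["group_a"] - aucs["group_b"]))
```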


Subjects
Glioma; Lung Neoplasms; Humans; Bias; Black People; Glioma/diagnosis; Glioma/genetics; Diagnostic Errors; Demography
10.
Stat Med ; 43(9): 1671-1687, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38634251

ABSTRACT

We consider estimation of the semiparametric additive hazards model with an unspecified baseline hazard function, where the effect of a continuous covariate has a specific shape but is otherwise unspecified. Such estimation is particularly useful for a unimodal hazard function, where the hazard is monotone increasing and then monotone decreasing around an unknown mode. The popular proportional hazards approach is limited in this setting due to the complicated structure of the partial likelihood. Our model defines a quadratic loss function, and its simple structure admits a global Hessian matrix that does not involve the parameters. Thus, once the global Hessian matrix is computed, a standard quadratic programming method can be applied by profiling over all possible locations of the mode. However, the quadratic programming method may be inefficient for a large global Hessian matrix in the profiling algorithm, since the dimension of the global Hessian matrix and the number of hypothetical modes are of the same order as the sample size. We propose the quadratic pool adjacent violators algorithm to reduce computational costs. The proposed algorithm is extended to the model with a time-dependent covariate and a monotone or U-shaped hazard function. In simulation studies, our proposed method improves computational speed compared to the quadratic programming method, with reductions in bias and mean squared error. We analyze data from a recent cardiovascular study.
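The paper's quadratic variant is not reproduced here, but the building block it accelerates is the standard pool-adjacent-violators algorithm (PAVA) for isotonic least squares, sketched below in plain Python.

```python
def pava(y, w=None):
    """Weighted isotonic (monotone nondecreasing) least-squares fit of y."""
    w = [1.0] * len(y) if w is None else list(w)
    # Each block: [weighted mean, total weight, number of points pooled].
    blocks = []
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge backwards while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, n2 = blocks.pop()
            m1, w1, n1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2, n1 + n2])
    # Expand pooled blocks back to one fitted value per observation.
    return [m for m, _, n in blocks for _ in range(n)]

print(pava([1.0, 3.0, 2.0, 2.0, 5.0]))  # -> [1.0, 2.33, 2.33, 2.33, 5.0]
```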


Subjects
Algorithms; Humans; Proportional Hazards Models; Computer Simulation; Probability; Bias; Likelihood Functions
12.
Radiographics ; 44(5): e230067, 2024 May.
Article in English | MEDLINE | ID: mdl-38635456

ABSTRACT

Artificial intelligence (AI) algorithms are prone to bias at multiple stages of model development, with the potential to exacerbate health disparities. However, bias in imaging AI is a complex topic that encompasses multiple coexisting definitions. Bias may refer to unequal preference for a person or group owing to preexisting attitudes or beliefs, either intentional or unintentional. Cognitive bias, by contrast, refers to systematic deviation from objective judgment due to reliance on heuristics, and statistical bias refers to differences between true and expected values, commonly manifesting as systematic error in model prediction (ie, a model with output unrepresentative of real-world conditions). Clinical decisions informed by biased models may lead to patient harm due to action on inaccurate AI results or may exacerbate health inequities due to differing performance among patient populations. While inequitable bias can harm patients in this context, a mindful approach leveraging equitable bias can address underrepresentation of minority groups or rare diseases. Radiologists should also be aware of bias after AI deployment, such as automation bias, the tendency to agree with automated decisions despite contrary evidence. Understanding common sources of imaging AI bias and the consequences of using biased models can guide preventive measures to mitigate its impact. Accordingly, the authors focus on sources of bias at stages along the imaging machine learning life cycle, attempting to simplify potentially intimidating technical terminology for general radiologists who use AI tools in practice or collaborate with data scientists and engineers on AI tool development. The authors review definitions of bias in AI, describe common sources of bias, and present recommendations to guide quality control measures to mitigate the impact of bias in imaging AI. Understanding the terms featured in this article will enable a proactive approach to identifying and mitigating bias in imaging AI. Published under a CC BY 4.0 license. Test Your Knowledge questions for this article are available in the supplemental material. See the invited commentary by Rouzrokh and Erickson in this issue.


Subjects
Algorithms; Artificial Intelligence; Humans; Automation; Machine Learning; Bias
14.
PLoS One ; 19(4): e0300881, 2024.
Article in English | MEDLINE | ID: mdl-38557691

ABSTRACT

BACKGROUND: Orthodontic systematic reviews (SRs) mostly include studies published in English rather than in other languages. Including only English studies in SRs may introduce language bias. This meta-epidemiological study aimed to evaluate the impact of language bias on orthodontic SRs. DATA SOURCE: SRs published in high-impact orthodontic journals between 2017 and 2021 were retrieved through an electronic search of PubMed in June 2022. Additionally, the Cochrane Oral Health Group was searched for orthodontic SRs published in the same period. DATA COLLECTION AND ANALYSIS: Study selection and data extraction were performed by two authors. Multivariable logistic regression was used to explore the association between SR characteristics and the inclusion of non-English studies. For the meta-epidemiological analysis, one meta-analysis was extracted from each SR with at least three trials, at least one of them non-English. The average difference in SMD was obtained using a random-effects meta-analysis. RESULTS: 174 SRs were included in this study. Almost one-quarter (n = 45/174, 26%) of these SRs included at least one non-English study. The associations between SR characteristics and the inclusion of non-English studies were not statistically significant, except for language restriction: the odds of including non-English studies were reduced by 89% in SRs with a language restriction (OR: 0.11, 95% CI: 0.01 to 0.55, P < 0.01). Only fourteen meta-analyses from the sample were included in the meta-epidemiological analysis, which revealed that non-English studies tended to overestimate the summary SMD by approximately 0.30, although this overestimation was not statistically significant under a random-effects model owing to substantial statistical heterogeneity (ΔSMD = -0.29, 95% CI: -0.63 to 0.05, P = 0.37). CONCLUSION: Language bias has a non-negligible impact on the results of orthodontic SRs. Orthodontic SRs should abstain from language restrictions and use sensitivity analyses to assess the impact of language on their conclusions, as non-English studies may be of lower quality.
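The random-effects pooling used in analyses like this is commonly the DerSimonian-Laird estimator; a compact sketch follows, with invented SMDs and sampling variances.

```python
import numpy as np

def dersimonian_laird(y, v):
    """Return (pooled estimate, tau^2) for effects y with variances v."""
    y, v = np.asarray(y, float), np.asarray(v, float)
    w = 1 / v                                  # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)           # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)    # between-study variance
    w_re = 1 / (v + tau2)                      # random-effects weights
    return np.sum(w_re * y) / np.sum(w_re), tau2

smd = [0.40, -0.10, 0.25, 0.60]   # hypothetical standardized mean differences
var = [0.05, 0.04, 0.08, 0.10]
print(dersimonian_laird(smd, var))
```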


Subjects
Language; Publications; Epidemiologic Studies; Bias
16.
Elife ; 12, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38568193

ABSTRACT

The differential signaling of multiple FGF ligands through a single fibroblast growth factor (FGF) receptor (FGFR) plays an important role in embryonic development. Here, we use quantitative biophysical tools to uncover the mechanism behind differences in FGFR1c signaling in response to FGF4, FGF8, and FGF9, a process which is relevant for limb bud outgrowth. We find that FGF8 preferentially induces FRS2 phosphorylation and extracellular matrix loss, while FGF4 and FGF9 preferentially induce FGFR1c phosphorylation and cell growth arrest. Thus, we demonstrate that FGF8 is a biased FGFR1c ligand, as compared to FGF4 and FGF9. Förster resonance energy transfer experiments reveal a correlation between biased signaling and the conformation of the FGFR1c transmembrane domain dimer. Our findings expand the mechanistic understanding of FGF signaling during development and bring the poorly understood concept of receptor tyrosine kinase ligand bias into the spotlight.


Subjects
Fibroblast Growth Factors; Signal Transduction; Female; Pregnancy; Humans; Ligands; Phosphorylation; Bias; Receptor, Fibroblast Growth Factor, Type 1/genetics
17.
Sci Rep ; 14(1): 7869, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38570555

ABSTRACT

This study investigated the impact of target template variation or consistency on attentional bias in location probability learning. Participants performed a visual search task to find a heterogeneous shape among a homogeneous set of distractors. The target and distractor shapes were either fixed throughout the experiment (target-consistent group) or varied unpredictably on each trial (target-variant group). The target was presented more often in one of the possible search regions, unbeknownst to the participants. When the target template was consistent throughout the biased visual search, spatial attention was persistently biased toward the frequent target location. However, when the target template was inconsistent and varied during the biased search, the spatial bias was attenuated, so that attention was less strongly prioritized toward the frequent target location. The results suggest that the alternating use of target templates may interfere with the emergence of a persistent spatial bias. The regularity-based spatial bias depends not only on the number of attentional shifts to the frequent target location but also on search-relevant contexts.


Subjects
Attention; Attentional Bias; Humans; Reaction Time; Probability Learning; Bias
18.
Sci Rep ; 14(1): 7848, 2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38570587

ABSTRACT

A significant level of stigma and inequality exists in mental healthcare, especially for under-served populations. These inequalities are reflected in the data collected for scientific purposes. When not properly accounted for, machine learning (ML) models learned from such data can reinforce these structural inequalities or biases. Here, we present a systematic study of bias in ML models designed to predict depression in four case studies covering different countries and populations. We find that standard ML approaches regularly exhibit biased behavior. We also show that mitigation techniques, both standard and our own post-hoc method, can be effective in reducing the level of unfair bias. There is no single best ML model for depression prediction that provides equality of outcomes. This emphasizes the importance of analyzing fairness during model selection and of transparent reporting about the impact of debiasing interventions. Finally, we identify positive habits that practitioners could follow, as well as open challenges, to enhance fairness in their models.
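Two of the standard group-fairness checks such an audit relies on are the demographic parity difference and the true-positive-rate gap (an equalized-odds component); here is a minimal sketch with synthetic predictions and labels, not the study's data or method.

```python
import numpy as np

def parity_and_tpr_gap(y_true, y_pred, group):
    """Return (demographic parity difference, TPR gap) across groups."""
    rates, tprs = {}, {}
    for g in np.unique(group):
        m = group == g
        rates[g] = y_pred[m].mean()                  # P(pred=1 | group)
        tprs[g] = y_pred[m & (y_true == 1)].mean()   # P(pred=1 | y=1, group)
    dp = max(rates.values()) - min(rates.values())
    eo = max(tprs.values()) - min(tprs.values())
    return dp, eo

rng = np.random.default_rng(4)
group = rng.choice(["a", "b"], 500)
y_true = rng.binomial(1, 0.3, 500)
y_pred = rng.binomial(1, np.where(group == "a", 0.35, 0.20))  # biased model
print(parity_and_tpr_gap(y_true, y_pred, group))
```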


Subjects
Depression; Habits; Humans; Depression/diagnosis; Bias; Health Facilities; Machine Learning
19.
Elife ; 12, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38564237

ABSTRACT

When observers have prior knowledge about the likely outcome of their perceptual decisions, they exhibit robust behavioural biases in reaction time and choice accuracy. Computational modelling typically attributes these effects to strategic adjustments in the criterion amount of evidence required to commit to a choice alternative - usually implemented by a starting point shift - but recent work suggests that expectations may also fundamentally bias the encoding of the sensory evidence itself. Here, we recorded neural activity with EEG while participants performed a contrast discrimination task with valid, invalid, or neutral probabilistic cues across multiple testing sessions. We measured sensory evidence encoding via contrast-dependent steady-state visual-evoked potentials (SSVEP), while a read-out of criterion adjustments was provided by effector-selective mu-beta band activity over motor cortex. In keeping with prior modelling and neural recording studies, cues evoked substantial biases in motor preparation consistent with criterion adjustments, but we additionally found that the cues produced a significant modulation of the SSVEP during evidence presentation. While motor preparation adjustments were observed in the earliest trials, the sensory-level effects only emerged with extended task exposure. Our results suggest that, in addition to strategic adjustments to the decision process, probabilistic information can also induce subtle biases in the encoding of the evidence itself.
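An SSVEP read-out amounts to measuring signal amplitude at a known stimulus flicker frequency; the toy below extracts it with an FFT from a synthetic trace. The 20 Hz tag, sampling rate, and signal are invented, not the study's parameters.

```python
import numpy as np

fs, dur, f_tag = 512, 2.0, 20.0                  # Hz, seconds, flicker Hz
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(5)
eeg = 2.0 * np.sin(2 * np.pi * f_tag * t) + rng.normal(0, 1.0, t.size)

spectrum = np.abs(np.fft.rfft(eeg)) * 2 / t.size  # single-sided amplitude
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
ssvep_amp = spectrum[np.argmin(np.abs(freqs - f_tag))]
print(f"amplitude at {f_tag} Hz: {ssvep_amp:.2f} (true 2.0)")
```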


Subjects
Cues; Evoked Potentials, Visual; Humans; Bias; Computer Simulation; Probability
20.
Sci Rep ; 14(1): 8613, 2024 Apr 14.
Article in English | MEDLINE | ID: mdl-38616210

ABSTRACT

Intergroup bias is the tendency for people to inflate positive regard for their in-group and to derogate the out-group. Across two online experiments (N = 922), this study revisits the methodological premises of research on language as a window into intergroup bias. Experiment 1 examined (i) whether the valence (positivity) of language production differs when communicating about an in- vs. out-group, and (ii) whether the extent of this bias is influenced by the positivity of input descriptors that were initially presented to participants as examples of how an in-group or out-group characterize themselves. Experiment 2 used the linear diffusion chain method to examine how biases are transmitted across cultural generations. The valence of verbal descriptions was quantified using ratings obtained from a large-scale psycholinguistic database. The findings from Experiment 1 indicated a bias towards employing positive language in describing the in-group (exhibiting in-group favoritism), particularly when the input descriptors were negative. However, there was weak evidence for increased negativity aimed at the out-group (i.e., out-group derogation). The findings from Experiment 2 demonstrated that in-group positivity bias propagated across cultural generations at a higher rate than out-group derogation. The results shed light on the formation and cultural transmission of intergroup bias.
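The norms-lookup step can be illustrated with a minimal sketch: average the per-word valence ratings of the words found in a lookup table. The tiny table below is a made-up stand-in for a large-scale psycholinguistic norms dataset.

```python
# Hypothetical valence norms (1 = very negative, 9 = very positive).
VALENCE = {"kind": 7.8, "honest": 7.5, "lazy": 3.1, "rude": 2.4, "people": 5.5}

def description_valence(text: str) -> float:
    """Mean valence of the words in `text` that appear in the norms table."""
    scores = [VALENCE[w] for w in text.lower().split() if w in VALENCE]
    return sum(scores) / len(scores) if scores else float("nan")

print(description_valence("Kind and honest people"))  # in-group style
print(description_valence("Lazy and rude people"))    # out-group style
```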


Subjects
Language; Psycholinguistics; Humans; Bias; Databases, Factual; Diffusion